2 research outputs found

    Security Vulnerabilities of the Cisco IOS Implementation of the MPLS Transport Profile

    We are interested in the security of the MPLS Transport Profile (MPLS-TP) in the context of smart-grid communication networks. The security guidelines of the MPLS-TP standards are written in a complex and indirect way, which led us to hypothesize that vendor solutions might not implement them satisfactorily. To test this hypothesis, we investigated the Cisco implementation of two MPLS-TP OAM (Operations, Administration, and Maintenance) protocols: bidirectional forwarding detection (BFD), used to detect failures in label-switched paths (LSPs), and protection state coordination (PSC), used to coordinate protection switching. Critical smart-grid applications, such as protection and control, rely on the protection-switching feature controlled by BFD and PSC. We did find security issues with this implementation. We built a testbed of eight nodes running the MPLS-TP-enabled Cisco IOS and demonstrated that an attacker with access to only one cable (for two of the attacks) or two cables (for one attack) can harm the network at several points (e.g., disabling both the working and the protection LSPs). This occurred despite our applying the security guidelines that Cisco provides for IOS and MPLS-TP. The attacks use forged BFD or PSC messages, which induce a label-edge router (LER) into believing false information about an LSP. In one attack, the LER disables the operational LSP; in another, the LER continues to believe that a physically destroyed LSP is up and running; in yet another, both the operational and the backup LSPs are brought down. Our findings suggest that the MPLS-TP standard should be more explicit about security. For example, to thwart the attacks revealed here, it should mandate either hop-by-hop authentication (such as MACsec) at every node or a dedicated authentication mechanism for BFD and PSC.
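    The forged-message attacks described above hinge on the fact that an LER acts on the session-state and diagnostic fields of BFD control packets it receives. The sketch below is not the authors' tooling; it is a minimal illustration, assuming the mandatory BFD control-packet layout of RFC 5880 and placeholder discriminator values, of how compactly such a packet can be serialized when no Authentication section is present.

```python
import struct

# BFD session states (RFC 5880, Section 4.1)
ADMIN_DOWN, DOWN, INIT, UP = 0, 1, 2, 3

def build_bfd_control(state, my_disc, your_disc,
                      diag=0, detect_mult=3,
                      min_tx=1_000_000, min_rx=1_000_000, min_echo_rx=0):
    """Serialize the 24-byte mandatory section of a BFD control packet
    (no Authentication section -- the absence that makes forgery cheap)."""
    version = 1
    flags = 0                      # P/F/C/A/D/M bits all clear
    length = 24                    # mandatory section only
    byte0 = (version << 5) | (diag & 0x1F)
    byte1 = ((state & 0x03) << 6) | (flags & 0x3F)
    return struct.pack("!BBBBIIIII",
                       byte0, byte1, detect_mult, length,
                       my_disc, your_disc,
                       min_tx, min_rx, min_echo_rx)

# Hypothetical forged packet: claims the session is AdminDown, using
# discriminators that would have to be observed or guessed on the tapped link.
forged = build_bfd_control(state=ADMIN_DOWN, my_disc=0x1234, your_disc=0x5678,
                           diag=7)  # diag 7 = Administratively Down
print(forged.hex())
```

    Because nothing in this mandatory section is authenticated, the receiving LER has no cryptographic way to tell such a packet from a legitimate one, which is why the abstract argues for mandatory hop-by-hop or protocol-level authentication.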

    Trust Evaluation in the IoT Environment

    Along with its many benefits, the heterogeneity of the IoT poses a new challenge: establishing a trustworthy environment among objects in the absence of proper enforcement mechanisms. Moreover, these challenges are often addressed only with respect to security and privacy. Such conventional network-security measures, however, are not adequate to preserve the integrity of the information and services exchanged over the Internet, which therefore remain vulnerable to threats ranging from data-management risks at the cyber-physical layers to potential discrimination at the social layer. Trust can thus be considered a key property for guaranteeing trustworthy services among IoT objects. Typically, trust revolves around the assurance and confidence that people, data, entities, information, or processes will function or behave in expected ways. Enforcing trust in an artificial society like the IoT is far more difficult, however, because things do not have the inherent judgmental ability that humans use to assess risks and other influencing factors. It is therefore important to quantify the perception of trust so that it can be understood by artificial agents. In computer science, trust is treated as a computational value depicted by a relationship between a trustor and a trustee, described in a specific context, measured by trust metrics, and evaluated by a mechanism. Several trust evaluation mechanisms can be found in the literature; most of them, however, drift toward security and privacy issues instead of considering the universal meaning of trust and its dynamic nature. Furthermore, they lack a proper trust evaluation model and a management platform that addresses all aspects of trust establishment, so it is almost impossible to bring these solutions together into a common platform that resolves end-to-end trust issues in a digital environment.
    This thesis attempts to fill these gaps through the following research work. First, it proposes concrete definitions that formally identify trust as a computational concept and describe its characteristics. Next, a well-defined trust evaluation model is proposed to identify, evaluate, and create trust relationships among objects for calculating trust. A trust management platform is then presented that identifies the major tasks of the trust enforcement process, including trust data collection, trust data management, trust information analysis, dissemination of trust information, and trust information lifecycle management. The thesis further proposes several approaches to assess trust attributes and thereby the trust metrics of the above model. To minimize dependence on human interaction in evaluating trust, an adaptive trust evaluation model based on machine learning techniques is also presented. From a standardization point of view, the scope of the current standards on network security and cybersecurity needs to be expanded to take trust issues into consideration. Hence, this thesis provides several inputs towards standardization on trust, including a computational definition of trust, a trust evaluation model targeting both object and data trust, and a platform to manage the trust evaluation process.
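    As a minimal illustration of the computational view of trust sketched above (a trustor's view of a trustee in a context, measured by trust metrics and evaluated by a mechanism), the snippet below aggregates a few trust attributes into a single value with a weighted mean. The attribute names, weights, and aggregation rule are placeholders for illustration only, not the model proposed in the thesis.

```python
from dataclasses import dataclass

@dataclass
class TrustRelationship:
    """Trust as a computational value: a trustor's assessment of a trustee
    in a given context, derived from measurable trust attributes."""
    trustor: str
    trustee: str
    context: str
    attributes: dict               # attribute name -> score in [0, 1]

def evaluate_trust(rel: TrustRelationship, weights: dict) -> float:
    """Weighted aggregation of trust attributes into one trust metric.
    Attributes and weights here are illustrative placeholders."""
    total_weight = sum(weights.get(name, 0.0) for name in rel.attributes)
    if total_weight == 0:
        return 0.0
    score = sum(value * weights.get(name, 0.0)
                for name, value in rel.attributes.items())
    return score / total_weight

# Hypothetical example: a gateway (trustor) evaluating a sensor (trustee)
# in the context of temperature reporting.
rel = TrustRelationship(
    trustor="gateway-01", trustee="sensor-17", context="temperature reporting",
    attributes={"data_consistency": 0.9, "availability": 0.8, "reputation": 0.6})
weights = {"data_consistency": 0.5, "availability": 0.3, "reputation": 0.2}
print(round(evaluate_trust(rel, weights), 3))   # 0.81
```

    An adaptive model of the kind the thesis describes would replace the fixed weights with parameters learned from observed behaviour, rather than values set by hand.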